Neuro-fuzzy Systems: D1.3 Neuro-fuzzy algorithms

Authors

  • Krzysztof J Cios
  • Witold Pedrycz
Abstract

See the abstract for Chapter D1.

Relatively early in neural network research, an interest emerged in analyzing and designing layered, feedforward networks augmented by formalisms stemming from the theory of fuzzy sets. One of the first approaches was the fuzzification of the binary McCulloch–Pitts neuron (Lee and Lee 1975). Several researchers then took a typical feedforward neural network architecture and analyzed combinations of such neurons with fuzzy sets viewed as inputs to the network. Similarly, networks were equipped with connections (weights) viewed as fuzzy sets with triangular membership functions. Interestingly, in all these cases the outputs of the network were kept numerical. Representative examples include the work of Yamakawa and Tomoda (1989), O'Hagan (1991), Gupta and Qi (1991), Hayashi et al (1992), and Ishibuchi et al (1992). These authors commonly employed fuzzy sets with either triangular or trapezoidal membership functions, and training was accomplished with a standard delta rule. In other cases a fuzzified delta rule was used (Hayashi et al 1992), and the delta rule was also replaced by other algorithms; for instance, Requena and Delgado (1992) used Boltzmann machine training.

D1.3.1 Fuzzy inference schemes and their realizations as neural networks

In the following we briefly review a category of fuzzy inference systems also known as fuzzy associative memories (Kosko 1993). This form of memory is often regarded as central to the implementation of fuzzy-rule-based systems and, in general, fuzzy systems (Wang and Mendel 1992). A fuzzy associative memory (FAM) consists of a fuzzifier, a fuzzy rule base, a fuzzy inference engine, and a defuzzifier. It is a static transformation that maps input fuzzy sets into output fuzzy sets (Kosko 1993); in other words, it carries out a mapping between unit hypercubes.
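The triangular membership functions mentioned above, and the view of a fuzzy set as a point in the unit hypercube, can be made concrete with a short sketch; the linguistic labels, parameter values, and grid are illustrative, not taken from the chapter:

```python
def triangular(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Linguistic labels for a variable ranging over [0, 9] (illustrative values).
small  = triangular(0.0, 1.0, 3.0)
medium = triangular(2.0, 4.0, 6.0)
big    = triangular(5.0, 7.0, 9.0)

# Sampling a membership function on a finite grid turns the fuzzy set into
# a point of the unit hypercube [0, 1]^n -- the representation a FAM maps.
grid = [0.5 * i for i in range(19)]       # 19 points covering [0, 9]
small_vec = [small(x) for x in grid]
```

A trapezoidal membership function, the other shape commonly used by the authors cited above, differs only in having a flat top between two peak points.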
The role of the fuzzifier and defuzzifier is to form a suitable interface between this transformation and the external environment in which modeling is carried out. The transformation itself is based on a set of fuzzy rules, namely rules consisting of fuzzy predicates that reflect domain knowledge and usually originate from human experts. Such knowledge may pertain to general control policies, linguistic descriptions of systems, and so on. As will be shown later, knowledge gained from such sources can substantially enhance learning in neural networks by reducing their training time.

The development of a FAM is realized in several steps (Kosko 1993). First, we identify the variables of the system and encode them linguistically in terms of fuzzy sets such as small, medium, and big. The second step is to associate these fuzzy sets by constructing rules (if–then statements) of the general form

  if X is A then Y is B

where X and Y are system variables, usually referred to as linguistic variables, while the fuzzy sets A and B are represented by their corresponding membership functions. A typical application requires from several to many rules of this form; their number is implied by the granularity of the fuzzy information captured by the rules. Thus the rules can be written as

  if X is A_k then Y is B_k.

© 1997 IOP Publishing Ltd and Oxford University Press, Handbook of Neural Computation, release 97/1, D1.3:1

As said before, each rule forms a partial mapping from the input space X into the output space Y, which can be written in the form of a fuzzy relation or, more precisely, a Cartesian product of A and B, namely

  R(x, y) = min(A(x), B(y))

where x ∈ X, y ∈ Y, and A(x) and B(y) are the grades of membership of x and y in the fuzzy sets A and B, respectively. In the third step we need to decide upon an inference mechanism, used for drawing inferences from a given piece of information and the available rules.
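On discretized universes, the Cartesian-product encoding of a single rule reduces to an element-wise minimum. A minimal sketch, with illustrative membership vectors:

```python
def rule_relation(A, B):
    """Fuzzy relation of the rule 'if X is A then Y is B':
    R[i][j] = min(A[i], B[j]) over discretized universes X and Y."""
    return [[min(a, b) for b in B] for a in A]

A = [0.0, 0.5, 1.0, 0.5, 0.0]   # membership vector of A on a 5-point universe X
B = [0.0, 1.0, 0.0]             # membership vector of B on a 3-point universe Y
R = rule_relation(A, B)
# At the core of A (membership 1.0) the rule transmits B unchanged: R[2] == B.
```

Aggregating several such relations, one per rule, by taking element-wise maxima yields the overall rule-base relation used at inference time.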
The inference mechanism embodies two key steps (Pedrycz 1993, 1995):

(i) Aggregation of rules. This summarization of the rules is almost always done by taking a union of the individual rules, so the aggregation of N rules leads to a fuzzy relation of the form

  R = ⋃_{k=1..N} (A_k × B_k).

(ii) Producing a fuzzy set from given A and R. The classic mechanism used here is the max–min composition B = A ∘ R, namely

  B(y) = sup_{x ∈ X} min(A(x), R(x, y)),  y ∈ Y.

Because of the nature of fuzzy sets, no perfect match is required to fire, or activate, a particular rule, as is the case for rules that do not include linguistic terms. Finally, although the employed inference strategy determines an output in the form of a fuzzy set, most of the time a user is interested in a crisp (single) output value, as required in most, if not all, current applications. To achieve that, one of several defuzzification techniques must be used. A frequently used one exploits a weighted sum of the modal values of the fuzzy sets of conclusion, giving rise to the expression

  z = (Σ_{k=1..N} λ_k b*_k) / (Σ_{k=1..N} λ_k)

where λ_k is the level of activation, or possibility measure, of the antecedent of the kth rule,

  λ_k = sup_{x ∈ X} min(A(x), A_k(x)),

and b*_k is a modal value of B_k, namely B_k(b*_k) = max_{y ∈ Y} B_k(y).

Two features of FAMs, very similar to those encountered in correlation-based associative memories, are worth emphasizing when analyzing their memorization and recall capabilities:

(i) The learning process is straightforward and instantaneous; in fact, FAMs do not require any learning. This could be regarded as an evident advantage, but it comes at the expense of a fairly low capacity and potential crosstalk distortions.

(ii) The crosstalk in the memory can be avoided for some carefully selected items to be stored. In particular, if all input items A_k are pairwise-disjoint normal fuzzy sets, A_k ∩ A_l = ∅ for all k, l = 1, 2, …
, N, k ≠ l, then B_k = A_k ∘ R for k = 1, 2, …, N, meaning perfect recall.

The functional summary of the FAM system, outlining its main components, is shown in figure D1.3.1. Wang (1992) proved that a fuzzy inference system equipped with the max–product composition and scaled Gaussian membership functions is a universal approximator. Let us recall that the main idea of universal approximation is that any continuous function f : R → R can be approximated by a neural network to any degree of accuracy on a compact subset of R (Hornik et al 1989). The FAM system described above is often utilized as part of a so-called bidirectional associative memory (BAM); its applications can be found in control tasks such as the inverted pendulum (Kosko 1993).

Figure D1.3.1. The architecture of the FAM system: a crisp input passes through the fuzzifier, the fuzzy rules and fuzzy inference mechanism, and the defuzzifier, which produces a crisp output.

D1.3.1.1 Fuzzy backpropagation

The fuzzy backpropagation algorithm (Xu et al 1992) exploits fuzzy rules for adjusting the activation function and the learning rate. By encoding heuristic knowledge about the behavior of standard backpropagation training, Xu et al (1992) were able to considerably shorten the time required to train the network, which too often is prohibitive for real problems. It should be noted that long training times for backpropagation algorithms arise mainly from keeping both the learning rate and the activation function fixed. Selection of a proper learning rate and an 'optimal' activation function in backpropagation algorithms had been studied before (Weir 1991, Silva and Almeida 1990, Rumelhart and McClelland 1986); however, the two parameters had not been studied in unison.
Xu et al (1992) proposed rapid minimization of the training error e by proper simultaneous selection of the learning rate, c(e, t), and of the steepness of the activation function, s(e, t, net_i), where t is time and net_i is the input to the activation function. As in the most common case, the weights of the network are adjusted by the gradient-descent method according to

  w_ji(t + 1) = w_ji(t) − c(e, t) ∂e/∂w_ji

where [w_ji] represents the weight matrix associated with the connections between the neurons, and the following activation function is used:

  s(e, t, net_i) = 1 / [1 + exp(−σ(e, t) net_i)].

The activation function s is modified by adjusting its steepness factor σ(e, t), as illustrated in figure D1.3.2.
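The simultaneous adjustment of c(e, t) and σ(e, t) can be sketched for a single sigmoid neuron. Xu et al's actual fuzzy rule base is not reproduced here; the `schedule` function, its error thresholds, and the toy data are illustrative assumptions, not the authors' rules:

```python
import math

def sigmoid(net, sigma):
    """Activation s(net) = 1 / (1 + exp(-sigma * net)); sigma is the steepness."""
    return 1.0 / (1.0 + math.exp(-sigma * net))

def schedule(error):
    """Stand-in for the fuzzy rules of Xu et al (1992): a large error calls for
    a bold learning rate and a gentle slope, a small error for fine steps and
    a sharper activation. The thresholds are assumptions."""
    if error > 0.25:
        return 0.9, 1.0      # learning rate c, steepness sigma
    if error > 0.05:
        return 0.5, 2.0
    return 0.1, 4.0

def train_step(w, x, target, c, sigma):
    """One gradient-descent step w_i <- w_i - c * de/dw_i, e = (s - target)^2 / 2."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    s = sigmoid(net, sigma)
    delta = (s - target) * sigma * s * (1.0 - s)   # de/dnet for the sigmoid above
    return [wi - c * delta * xi for wi, xi in zip(w, x)]

# Toy run: drive the neuron's output toward 0.9 while c and sigma track the error.
w, x, target = [0.0, 0.0], [1.0, 1.0], 0.9
c, sigma = 0.5, 2.0
for _ in range(300):
    s = sigmoid(sum(wi * xi for wi, xi in zip(w, x)), sigma)
    c, sigma = schedule(0.5 * (s - target) ** 2)
    w = train_step(w, x, target, c, sigma)
```

Varying both parameters in unison is the point: fixed c and σ would follow the same gradient, but without the error-driven speed-up the fuzzy rules are meant to provide.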

Publication date: 1996